Round Overview
SRM 612 was brought to you by Cricicle (division 2 easy and division 1 medium) and EmK (the rest of the tasks). Division 1 coders faced a fairly standard TopCoder easy (a search problem in which you need to be careful not to miss any possible case); Petr needed only 3 minutes to solve it correctly. The medium was a challenging dynamic programming problem, which didn't stop Egor from getting it done in 17 minutes. The hardest problem of the match featured probability theory and soon reduces to Gaussian elimination in O(n ^ 6); with a trick, the Gaussian elimination can be brought down to O(n ^ 3). Some contestants also found clever simulations that solve this problem. Congratulations to socketnaut for solving it very fast! Overall, the match was won by Egor, who also used the intended solution for the 1000-point task. Second place went to pieguy, who is now very close to becoming a new Algorithm target, and third place was won by lunae_. The Division 2 match was a very close race for the top spots. Congratulations to Jozik for winning the match, and welcome to Division 1!
Used as: Division Two - Level One:
Value | 250 |
Submission Rate | 1118 / 1173 (95.31%) |
Success Rate | 1081 / 1118 (96.69%) |
High Score | thebomb556 for 249.56 points (1 mins 12 secs) |
Average Score | 223.45 (for 1081 correct submissions) |
This problem tested whether you could convert an integer into a string; many solved it using standard library methods. You also needed to be a little careful: A is always non-negative, so it should always be included in the output, and 0 is also non-negative.
Java solution:
String makeMicroString(int A, int D) {
String micro = "";
while (A >= 0) {
micro += A;
A -= D;
}
return micro;
}
C++ solution:
string makeMicroString(int A, int D) {
string micro = "";
while (A >= 0) {
micro += to_string(A);
A -= D;
}
return micro;
}
MinimumSquareEasy
Problem Details
Used as: Division Two - Level Two:
Value | 500 |
Submission Rate | 431 / 1173 (36.74%) |
Success Rate | 318 / 431 (73.78%) |
High Score | Artyom89 for 471.71 points (7 mins 2 secs) |
Average Score | 291.73 (for 318 correct submissions) |
The area of a square is l * l, where l is the side length of the square. Since f(l) = l * l is monotonic for l >= 0, minimizing the area is the same as minimizing l.
The square we need to find has to contain at least N - 2 of the given points. This means at most 2 points are NOT contained. So we need to calculate the minimum side length of the square in 3 cases:
(1) no point is deleted from the input
(2) one point is deleted from the input
(3) two points are deleted from the input
Suppose that for a set of points the optimal side length of the square is L. If we delete one point from the set, L will never increase; it will remain the same or decrease. This means the minimal length is always achieved in the 3rd case.
Since the number of points is up to 50, we can simply iterate over the two points we're deleting. Now we have to solve the subproblem: what is the minimal side length of a square such that all points except the deleted ones are strictly inside it?
We can uniquely define a square by its lower left corner (the corner with minimal x and minimal y) and its side length. Suppose the optimal square for the set of points we check has its lower left corner at coordinates (xl, yl) and side length L. Each point (x, y) in the set needs to be strictly inside this square.
The natural question is: how do we check if a point is inside a given square? In the general case, this requires some geometry. But all squares in this problem have a particularity: their sides are parallel to the axes. Working through a few examples on paper, one can see that a point (x, y) is strictly inside the square if and only if:
xl < x < xl + L
yl < y < yl + L
The second parts of these inequalities give a restriction for L:
L > x - xl
L > y - yl
We get that L > max(x - xl, y - yl) must hold for every point. Since coordinates are integers and we want L to be as small as possible, we take L = max(x - xl, y - yl) + 1, and we need max(x - xl, y - yl) to be as small as possible. In the worst case we have to consider 2 points: (xmax, y) and (x, ymax), the points having the maximum x coordinate and the maximum y coordinate, respectively. We can't skip those points, so our expression becomes max(xmax - xl, ymax - yl). Fortunately, we can still make this expression as small as possible by choosing xl and yl to be as big as possible.
Coming back to the inequalities, we know that xl < x and yl < y for every point, which means xl < xmin and yl < ymin, where (xmin, y) and (x, ymin) are the points having the minimum x coordinate and the minimum y coordinate, respectively. As xl and yl need to be as big as possible, we pick xl = xmin - 1 and yl = ymin - 1.
Summing up, L = max(xmax - xmin + 1, ymax - ymin + 1) + 1 = max(xmax - xmin, ymax - ymin) + 2. All we need to do is calculate xmax, ymax, xmin and ymin, compute the current L, and keep the minimum L as Lmin. The answer to the problem is Lmin * Lmin. Iterating over the two deleted points takes O(n ^ 2) time, where n is the number of points in the input. For each fixed pair, calculating xmax, xmin, ymax, ymin takes O(n) time, so the complexity is O(n ^ 3).
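To make the procedure concrete, here is a minimal Java sketch of the O(n ^ 3) approach (the method name and signature are assumptions, not the actual problem interface; it assumes at least 3 points are given):
long minAreaAfterDeletingTwo(int[] x, int[] y) {
    int n = x.length;
    long best = Long.MAX_VALUE;
    // Iterate over the (at most) two deleted points; i == j means effectively only
    // one point is deleted, which is never better but keeps the code simple.
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            long xmin = Long.MAX_VALUE, xmax = Long.MIN_VALUE;
            long ymin = Long.MAX_VALUE, ymax = Long.MIN_VALUE;
            for (int k = 0; k < n; k++) {
                if (k == i || k == j) continue; // skip the deleted points
                xmin = Math.min(xmin, x[k]); xmax = Math.max(xmax, x[k]);
                ymin = Math.min(ymin, y[k]); ymax = Math.max(ymax, y[k]);
            }
            long L = Math.max(xmax - xmin, ymax - ymin) + 2; // minimal side length
            best = Math.min(best, L * L);
        }
    }
    return best;
}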
TorusSailingEasy
Problem Details
Used as: Division Two - Level Three:
Value | 1000 |
Submission Rate | 62 / 1173 (5.29%) |
Success Rate | 37 / 62 (59.68%) |
High Score | Jozik for 757.37 points (17 mins 16 secs) |
Average Score | 523.22 (for 37 correct submissions) |
To begin, let's find all cells that Fox Ciel can reach, no matter how small the probability of reaching them is. Fox Ciel starts from (0, 0) and can go either to (1, 1) or to (n - 1, m - 1). Suppose she has chosen cell (1, 1). From it she can go either to (0, 0) or to (2 mod n, 2 mod m). But (0, 0) is already considered, so there's no point in considering it again. Let's generalize: we are currently at cell (x, y) and we go to cell ((x + 1) mod n, (y + 1) mod m). From there, there's no point in going back to cell (x, y), so we only follow cell ((x + 2) mod n, (y + 2) mod m). We stop the walk when we get back to cell (0, 0).
These are the reachable cells when from (0, 0) we've gone to (1, 1). What if we go to (n - 1, m - 1) instead? That cell was already visited. Why? The only way we could return to (0, 0) in the previous case is to arrive at (n - 1, m - 1) and then make a step to ((n - 1 + 1) mod n, (m - 1 + 1) mod m). Similarly, cell (n - 2, m - 2) was already visited, for the same reason. So the only reachable cells are the ones of the form (i mod n, i mod m), with i >= 0. We'll run a simulation to collect them all. Denote by cnt the minimum integer such that cnt + 1 = 0 (mod n) and cnt + 1 = 0 (mod m). Then the cells of the form (i mod n, i mod m) with 0 <= i <= cnt are exactly the distinct reachable cells.
Let's imagine the torus as a graph with cnt + 1 vertices (numbered 0, 1, …, cnt). There's an undirected edge between vertex x and vertex (x + 1) mod (cnt + 1). During the simulation, we can find a number goal such that (goal mod n, goal mod m) = (goalX, goalY). If this number does not exist, return -1. The problem becomes: we start from vertex 0 and repeatedly walk to one of the two neighbors of the current vertex, chosen with equal probability. What is the expected number of days until we reach vertex goal for the first time?
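For illustration, here is a minimal Java sketch of that simulation (the method name and signature are assumptions, not the actual problem interface):
// Returns the index goal of cell (goalX, goalY) on the cycle of reachable cells,
// or -1 if that cell is unreachable.
static int findGoalIndex(int n, int m, int goalX, int goalY) {
    int goal = -1;
    int x = 0, y = 0, i = 0;
    do {
        if (x == goalX && y == goalY) goal = i;
        x = (x + 1) % n;
        y = (y + 1) % m;
        i++;
    } while (x != 0 || y != 0);
    // Here i == cnt + 1, the number of distinct reachable cells.
    return goal;
}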
Those experienced with probability theory will immediately recognize this problem: it asks for a "hitting time". There's an explicit formula for it, which can be found on the internet: goal * (cnt + 1 - goal), where cnt + 1 is the number of vertices of the cycle. A lot of contestants used this approach during the contest. Next, I'll describe an approach that requires minimal knowledge.
Suppose we calculate E[node] = the expected number of days to arrive at node for the first time. Can we find a recurrence for E? We start from the definition of expected value: expected value = sum of probability * value. In this case, we can arrive at node x from node x - 1 or from node x + 1, each with probability 1 / 2 (indices are taken modulo cnt + 1, so every node is between 0 and cnt). The values we get in the two cases are E[x - 1] + 1 (the expected number of days needed to reach x - 1, plus one day needed to move to node x) and E[x + 1] + 1, respectively. So the recurrence relation for E[x] is E[x] = (E[x - 1] + E[x + 1]) / 2 + 1. Also, E[0] = 0, because we "arrive" at vertex 0 on day 0.
For each 0 < x < cnt, we have E[x] = (E[x - 1] + E[x + 1]) / 2 + 1. For x = cnt the neighbor "after" cnt is vertex 0, so E[cnt] = (E[cnt - 1] + E[0]) / 2 + 1 = E[cnt - 1] / 2 + 1, using E[0] = 0. Together with E[0] = 0, this is a system of equations that we need to solve. One possible way is to express each term in the form E[i] = a * E[1] + b, treating E[1] as the only unknown. Let's first make the equations more friendly:
E[x] = (E[x - 1] + E[x + 1]) / 2 + 1
2 * E[x] = E[x - 1] + E[x + 1] + 2
2 * E[x] - E[x - 1] - E[x + 1] = 2
2 * E[x - 1] - E[x - 2] - E[x] = 2 (the same equation, written for x - 1 instead of x)
E[x] = 2 * E[x - 1] - E[x - 2] - 2
We define a[], b[] such that E[x] = a[x] * E[1] + b[x].
a[0] = 0; b[0] = 0
a[1] = 1; b[1] = 0
We now want to calculate a[i] and b[i] for each i between 2 and cnt. We get that:
E[x] = a[x] * E[1] + b[x] = 2 * (a[x - 1] * E[1] + b[x - 1]) - (a[x - 2] * E[1] + b[x - 2]) - 2
a[x] * E[1] + b[x] = E[1] * (2 * a[x - 1] - a[x - 2]) + 2 * b[x - 1] - b[x - 2] - 2
We can conclude that:
a[x] = 2 * a[x - 1] - a[x - 2] and
b[x] = 2 * b[x - 1] - b[x - 2] - 2
Having the recurrences, we can calculate a[x] and b[x] for every x up to cnt. Now we use the remaining equation, the one for E[cnt]: a[cnt] * E[1] + b[cnt] = (a[cnt - 1] * E[1] + b[cnt - 1]) / 2 + 1. Everything in this equation except E[1] is known, so we can finally calculate E[1]. Then the answer is E[goal] = a[goal] * E[1] + b[goal]. A code implementing this idea is provided on the forum.
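Here is a minimal Java sketch of this computation, assuming cnt and goal (with goal >= 1) have already been found by the simulation sketched earlier; for goal = 0 the answer is simply 0.
double[] a = new double[cnt + 1];
double[] b = new double[cnt + 1];
a[0] = 0; b[0] = 0;
a[1] = 1; b[1] = 0;
for (int x = 2; x <= cnt; x++) {
    a[x] = 2 * a[x - 1] - a[x - 2];
    b[x] = 2 * b[x - 1] - b[x - 2] - 2;
}
// Closing equation E[cnt] = E[cnt - 1] / 2 + 1 (using E[0] = 0), solved for E[1]:
double e1 = (b[cnt - 1] / 2 + 1 - b[cnt]) / (a[cnt] - a[cnt - 1] / 2);
double answer = a[goal] * e1 + b[goal]; // expected number of days to reach goal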
Used as: Division One - Level One:
Value | 250 |
Submission Rate | 647 / 748 (86.50%) |
Success Rate | 362 / 647 (55.95%) |
High Score | Petr for 247.15 points (3 mins 3 secs) |
Average Score | 169.94 (for 362 correct submissions) |
To begin, let's try to "guess" the minimal side length of a square which contains at least K points. Suppose we "guessed" the number L. Can we check whether there exists a square of side length L covering at least K points? We define the function f(L) = 1 if such a square exists and 0 otherwise. Note that if f(L) = 1, then f(L + 1) = 1 too (we can simply extend the square of length L; at least K points will still be covered).
So let's look at what f looks like. It is something like 0 0 0 … 0 1 1 1 … . Since the function is monotone, we can use binary search to find the minimal x such that f(x) = 1. We take lower bound = 0 and upper bound = a big number and simply perform binary search. The remaining question is how to evaluate f(x).
For a fixed length L, we can define a square by its lower left corner (point with minimal x, if tie with minimal y). Suppose this lower left point is (lx, ly), then a point (x, y) is covered by the square if and only if:
lx < x < lx + L
ly < y < ly + L
Given the (lx, ly) coordinates, one can check in O(n) time whether at least K points are covered by the square, where n = the number of points in the input. What if we restrict the number of (lx, ly) corners that need to be checked? That is, we check only a limited set of corners (lx, ly), and if none of them yields the wanted square, we can be sure the wanted square does not exist. I'll show that checking O(n ^ 2) corners is enough to determine whether the wanted square exists. The complexity then becomes O(n ^ 3 * log(upperBound)), enough to pass all tests!
Let's imagine for the moment that all lower left corners we're checking have a fixed lx coordinate. Which values of ly really matter when checking all possible squares? Suppose we iterate ly from the lowest possible value to the biggest. When going from ly to ly + 1, one of three cases can happen.
Denote by curSet[] the set of points contained in the current square (corner at ly + 1) and by prevSet[] the set of points contained in the previous square (corner at ly).
Case 1: curSet[] is the same as prevSet[]
Case 2: curSet[] has less points than prevSet[]
Case 3: curSet[] has some new points which are not in prevSet[]
It's clear that only case 3 is relevant. So we need to find all values of ly for which a new point enters the square. We iterate over all points (x[i], y[i]) and find the minimal ly for which the square with lower left corner (lx, ly) contains point (x[i], y[i]). The minimal value is what matters because we iterate from small values to big ones. Ignoring the first inequality (lx is fixed), we get
ly < y[i] < ly + L
ly > y[i] - L
As ly needs to be minimal, we choose ly = y[i] - L + 1
So, for a fixed lx, taking all corners (lx, y[i] - L + 1) as lower left corners is enough. We can use similar logic to find all useful values of lx. For each value of lx, we'll iterate over O(n) candidates for ly. Now, which values of lx are useful? By the same reasoning, iterating lx from the smallest value to the largest, we're only interested in coordinates that add at least one new point to the set of covered points for at least one of the O(n) squares checked at that lx. With the same logic, we get that all useful lx values are of the form x[i] - L + 1. So, for each pair of indices i, j, we check the square with lower left corner (x[i] - L + 1, y[j] - L + 1). There are O(log(upperBound)) checked values of L; for each of them we check O(n ^ 2) lower left corners, and for each corner we determine in O(n) time whether it covers at least K points, so the overall complexity is O(n ^ 3 * log(upperBound)).
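Putting the pieces together, here is a minimal Java sketch of the whole solution (the method name, the signature and the binary-search upper bound are assumptions, not the actual problem interface):
static long minArea(long[] x, long[] y, int K) {
    int n = x.length;
    long lo = 1, hi = 2000000005L; // hi: any side length that surely covers all points
    while (lo < hi) {              // binary search for the smallest feasible L
        long L = lo + (hi - lo) / 2;
        boolean ok = false;
        for (int i = 0; i < n && !ok; i++) {     // candidate lx = x[i] - L + 1
            for (int j = 0; j < n && !ok; j++) { // candidate ly = y[j] - L + 1
                long lx = x[i] - L + 1, ly = y[j] - L + 1;
                int covered = 0;
                for (int k = 0; k < n; k++) {
                    if (lx < x[k] && x[k] < lx + L && ly < y[k] && y[k] < ly + L) {
                        covered++;
                    }
                }
                if (covered >= K) ok = true;
            }
        }
        if (ok) hi = L; else lo = L + 1;
    }
    return lo * lo;
}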
Used as: Division One - Level Two:
Value | 525 |
Submission Rate | 78 / 748 (10.43%) |
Success Rate | 66 / 78 (84.62%) |
High Score | Egor for 399.84 points (17 mins 3 secs) |
Average Score | 245.01 (for 66 correct submissions) |
Let’s break the problem down. Suppose we only had to color a single cycle of N vertices with K colors, how would we do it?
A good starting point is to choose an arbitrary vertex, let’s call it V, and assign it a color. There are K choices of color for that vertex. Then, going clockwise, we have K-1 choices for the next vertex, and so on. However, there is a problem once we finally get around to the very last vertex. We can’t tell whether there are K-2 choices or K-1 choices for the final vertex, because it depends on whether the second to last vertex was the same color as V, the first vertex.
Hence, we need to take into account whether the vertex we are coloring is the same color as V or a different color from V.
int[][] oneCycle = new int[N + 5][2];
// oneCycle[i][0] = the number of ways to color i vertices in a path, with the first vertex the same color as the last vertex
// oneCycle[i][1] = the number of ways to color i vertices in a path, with the first vertex a different color from the last vertex
oneCycle[1][0] = K;
oneCycle[1][1] = 0;
for (int i = 2; i <= N + 1; ++i) {
    // Take anything that didn't end with the same color as the first, and color the vertex same as the first
    oneCycle[i][0] = oneCycle[i - 1][1];
    // If it already was different, and we want it to stay different, we have K-2 choices of color.
    // If it was the same, we have K-1 choices to make it different.
    oneCycle[i][1] = oneCycle[i - 1][1] * (K - 2) + oneCycle[i - 1][0] * (K - 1);
}
// Number of ways to color a cycle of N vertices
answer = oneCycle[N + 1][0];
I’ve left the part out where you take care to reduce the answers modulo 1,000,000,007.
The runtime for this is only O(N) where N is the maximum number of vertices in a cycle.
Notice that when we've selected our vertex V and start coloring the cycle in one direction, at any point along the way all that we've really done is color a path, recording the number of ways in which the coloring has the same color at its two ends and the number of ways in which it doesn't. This is important to note - counting colorings of a cycle of N vertices is the same as counting colorings of a path of N + 1 vertices with the condition that the path begins and ends with the same color.
Now, for the main problem, we have several cycles, up to 50, and they are connected to each other with “dashed” edges. The dashed edges restrict us such that the connected vertices must be the same color. This is very similar to what we observed when trying to color a single cycle - we had to know whether the endpoints of “what we had done so far” were the same color or not.
The idea here is to leverage our knowledge about coloring paths to figure out how to color the entire graph structure.
Start the same way as before, by choosing some vertex to start our coloring. This time, though, not all the vertices within a cycle are the same. Up to 2 of the vertices are connected via a dashed edge to the other cycles. It turns out that it is most convenient for our iteration if we select the vertex in cycle C_0 which has a dashed edge connecting it to cycle C_N-1.
The vertex we've colored in cycle C_0 is toVertex[N-1]. We are also forced to color fromVertex[N-1] in cycle C_N-1 the same color. Once we've selected that color, we can, in effect, treat each direction (clockwise/counterclockwise) as if it were coming from a separate vertex. This makes it easier to see how we can apply our knowledge about coloring paths.
The same strategy as before, counting the number of ways to color a path, applies here.
There are 0 ways to begin where the endpoints will be different colors, and K ways to begin where the endpoints will be the same color.
In what follows, Blue is the original color - the color of the very first vertex we've selected. Red and Green are examples of colors which are "not the same as the original color".
Just as with coloring a path where we wanted to count the number of ways to color the path with the endpoints being the same color, and the number of ways to color the path with the endpoints being different colors, we can determine the different situations that result as we unwrap the cycles.
There are 5 different situations that can occur when deciding the color for the connecting vertex:
(a) Original --> Original (Blue --> Blue)
(b) Original --> Different (Blue --> something that is not Blue)
(c) Different --> Original (something that is not Blue --> Blue)
(d) Different --> Same Different (something that is not Blue --> the same color, that is not Blue)
(e) Different --> Other Different (something that is not Blue --> a different color, still not Blue)
Remember - we’re assuming that the vertex color of the endpoints is already given, and we are counting the number of ways to select the color for the connecting vertex.
Situation (a) occurs in 1 way, since it must be the Original color.
Situation (b) occurs in K-1 ways, one for each of the K-1 other colors.
Situation (c) occurs in 1 way, because there is only 1 choice to make it the Original color.
Situation (d) occurs in 1 way, because there is only 1 choice to make it the same as what it was before.
Situation (e) occurs in K-2 ways, because it must differ from both the Original color and the color that is already there.
Note that the total number of ways for (a) and (b) together is K, and the total number of ways for (c), (d), and (e) together is K, which means that we've covered all the possibilities - there were K colors available for the connecting vertex.
The last thing that we need to do is, for each of the situations, count how many ways there are to assign all the vertices in between.
In the situations where the path's endpoints get the same color, which are (a) and (d), for a path of length A we could just use oneCycle[A][0]. The issue with using it directly is that it counts all of the ways for the path to have its first and last vertices the same color, while in the current situation we have already selected the specific color we want. Due to symmetry in the colorings, the value we are looking for is therefore oneCycle[A][0] / K. And in order to do division when we are dealing with a modulus, we need the inverse of K modulo 1,000,000,007.
In the situations where the endpoints get different colors, which are (b), (c), and (e), again, we can't just use oneCycle[A][1]. Due to symmetry in the colorings we need to divide by K*(K-1), because we want a specific color for the first vertex and a specific other color for the last vertex. So we also need the inverse of K*(K-1) modulo 1,000,000,007 to get the correct calculation.
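As a quick sanity check of the symmetry argument, take K = 3 and a path of 2 vertices: oneCycle[2][1] = oneCycle[1][1] * (K - 2) + oneCycle[1][0] * (K - 1) = 0 + 3 * 2 = 6, and dividing by K * (K - 1) = 6 gives exactly 1 way, which is right - once both endpoint colors are fixed (and different), a 2-vertex path has exactly one valid coloring.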
Below is some Java code to implement the described situation.
int N = vertexCount.length; // number of cycles
int same = K;
int diff = 0;
int MOD = 1000000007;
int inverseK = modularInverse(K, MOD);
int inverseKKm1 = modularInverse(K * (K - 1), MOD);
for (int i = 0; i < N; ++i) {
int prevIdx = (i - 1 + N) % N;
int topLen = vertexCount[i] - Math.abs(fromVertex[i] - toVertex[prevIdx]) + 1; // calculates length of upper path
int bottomLen = Math.abs(fromVertex[i] - toVertex[prevIdx]) + 1; // calculates length of lower path
int nextSame = same * oneCycle[topLen][0] * inverseK * oneCycle[bottomLen][0] * inverseK; // situation (a)
nextSame += diff * oneCycle[topLen][1] * inverseKKm1 * oneCycle[bottomLen][1] * inverseKKm1; // situation (c)
int nextDiff = same * oneCycle[topLen][1] * inverseKKm1 * oneCycle[bottomLen][1] * inverseKKm1 * (K - 1); // situation (b)
nextDiff += diff * oneCycle[topLen][0] * inverseK * oneCycle[bottomLen][0] * inverseK; // situation (d)
nextDiff += diff * oneCycle[topLen][1] * inverseKKm1 * oneCycle[bottomLen][1] * inverseKKm1 * (K - 2); // situation (e)
same = nextSame;
diff = nextDiff;
}
return same;
I've left out the implementation of modularInverse, which can in this case be done with either the extended Euclidean algorithm or modular exponentiation. I've also left out being careful with the modulus inside the loop for the sake of readability.
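For completeness, here is one possible modularInverse via modular exponentiation (a sketch; the extended Euclidean algorithm works equally well). It relies on the fact that 1,000,000,007 is prime, so a^(MOD - 2) is the inverse of a by Fermat's little theorem:
static int modularInverse(long a, int mod) {
    long result = 1, base = a % mod, e = mod - 2;
    while (e > 0) {
        if ((e & 1) == 1) result = result * base % mod;
        base = base * base % mod;
        e >>= 1;
    }
    return (int) result;
}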
Lastly, we need to deal with the minor detail of when fromVertex[i] == toVertex[i-1], i.e., when the two special vertices of the cycle coincide. In that case bottomLen = 1, so there is oneCycle[1][0] / K = 1 way to color the (coinciding) endpoints with the same given color, which makes sense, and oneCycle[1][1] / (K * (K - 1)) = 0 ways to color them with different colors. So our recursion is still valid for that case.
The runtime for this portion is O(C), where C is the number of cycles. There is also a factor of O(log K) for calculating the inverses of K and K*(K-1) modulo 1,000,000,007. The total runtime is thus O(N + C + log K).
Used as: Division One - Level Three:
Value | 1000 |
Submission Rate | 13 / 748 (1.74%) |
Success Rate | 11 / 13 (84.62%) |
High Score | socketnaut for 901.59 points (9 mins 36 secs) |
Average Score | 635.38 (for 11 correct submissions) |
Let's solve an equivalent problem that makes the calculations a lot easier. A path from (0, 0) to (goalX, goalY), where each day we move from (x, y) to ((x + 1) mod n, y) or to (x, (y + 1) mod m), is equivalent to a path from (goalX, goalY) to (0, 0), where each day we move from (x, y) to ((x - 1) mod n, y) or to (x, (y - 1) mod m). Why? Take a possible path: (0, 0) -> (0, 1) -> (1, 1) -> (2, 1). We can think of it in reverse: (2, 1) -> (1, 1) -> (0, 1) -> (0, 0). The difference is that instead of asking which cells we can reach the next day from (x, y), we ask in which cell we were the previous day given that today we are in (x, y). It's easy to see that we could only have come from ((x - 1) mod n, y) or (x, (y - 1) mod m).
To conclude, we're interested in the expected number of days to reach (0, 0) if we start from cell (x, y) and on each day we can go either to ((x - 1) mod n, y) or to (x, (y - 1) mod m). Let's define E[x][y] = the expected number of days to reach (0, 0) starting from cell (x, y). Obviously, E[0][0] = 0. For the other cells, E[x][y] = (E[x - 1][y] + E[x][y - 1]) / 2 + 1, with indices taken modulo n and m (if this part is not clear, I recommend solving the D2 1000 first). We can rewrite this as 2 * E[x][y] - E[x - 1][y] - E[x][y - 1] = 2. This is a system of linear equations, which can be solved by Gaussian elimination. There's only one problem: Gaussian elimination's complexity is O(variables ^ 3), and here variables = N ^ 2, so we get complexity O(N ^ 6), too much for our constraints.
Fortunately, we can optimize this method to O(N ^ 3). The key observation is that only the variables E[0][0], E[0][1], …, E[0][m - 1], E[1][0], E[2][0], …, E[n - 1][0] need to enter the Gaussian elimination. Any other variable E[x][y] can be written as a linear combination of these variables. More exactly, E[x][y] = x0 * E[0][0] + x1 * E[0][1] + … + x(m - 1) * E[0][m - 1] + xm * E[1][0] + x(m + 1) * E[2][0] + … + x(m + n - 2) * E[n - 1][0] + x(m + n - 1), where the last term is a free constant. We can prove this property by mathematical induction: it's trivially true for the variables of the form E[i][0] or E[0][j] (the coefficient of E[i][0], respectively E[0][j], is 1 and the rest of the coefficients are 0). Then, let's see what happens for E[x][y], assuming the property holds for E[x - 1][y] and for E[x][y - 1]:
E[x][y] = (E[x - 1][y] + E[x][y - 1]) / 2 + 1
E[x - 1][y] = x0 * E[0][0] + x1 * E[0][1] + … + x(m - 1) * E[0][m - 1] + xm * E[1][0] + x(m + 1) * E[2][0] + … + x(m + n - 2) * E[n - 1][0] + x(m + n - 1)
E[x][y - 1] = y0 * E[0][0] + y1 * E[0][1] + … + y(m - 1) * E[0][m - 1] + ym * E[1][0] + y(m + 1) * E[2][0] + … + y(m + n - 2) * E[n - 1][0] + y(m + n - 1)
We want to prove that E[x][y] = z0 * E[0][0] + z1 * E[0][1] + … + z(m - 1) * E[0][m - 1] + zm * E[1][0] + z(m + 1) * E[2][0] + … + z(m + n - 2) * E[n - 1][0] + z(m + n - 1).
Then z0 = (x0 + y0) / 2, z1 = (x1 + y1) / 2, …, z(m + n - 2) = (x(m + n - 2) + y(m + n - 2)) / 2, and z(m + n - 1) = (x(m + n - 1) + y(m + n - 1)) / 2 + 1. This leads to an O(N ^ 3) algorithm to calculate the coefficients of every E[x][y].
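Here is a minimal Java sketch of how these coefficients can be propagated (the variable ordering and array names are assumptions). The variables are ordered E[0][0], E[0][1], …, E[0][m - 1], E[1][0], …, E[n - 1][0], giving V = n + m - 1 of them, and index V stores the free constant term:
int V = n + m - 1;
double[][][] coef = new double[n][m][V + 1];
// Base cases: each border cell is "itself" with coefficient 1.
for (int j = 0; j < m; j++) coef[0][j][j] = 1;         // E[0][j]
for (int i = 1; i < n; i++) coef[i][0][m + i - 1] = 1; // E[i][0]
// Induction step: E[x][y] = (E[x - 1][y] + E[x][y - 1]) / 2 + 1 for x, y >= 1.
for (int xx = 1; xx < n; xx++) {
    for (int yy = 1; yy < m; yy++) {
        for (int k = 0; k <= V; k++) {
            coef[xx][yy][k] = (coef[xx - 1][yy][k] + coef[xx][yy - 1][k]) / 2;
        }
        coef[xx][yy][V] += 1; // the "+ 1" from the recurrence goes to the constant term
    }
}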
We can now write our system of equations using only the variables E[0][0], E[0][1], …, E[0][m - 1], E[1][0], E[2][0], …, E[n - 1][0]. For example, suppose we want to write the equation for E[2][0]. It is: 2 * E[2][0] - E[1][0] - E[2][m - 1] = 2. Next, we replace E[2][m - 1] by its coefficients (calculated before) with respect to E[0][0], E[0][1], …, E[0][m - 1], E[1][0], E[2][0], …, E[n - 1][0], and we're done. Now variables = m + n - 1, so the Gaussian elimination runs in time proportional to O((n + m) ^ 3). After we get E[0][0], E[0][1], …, E[0][m - 1], E[1][0], E[2][0], …, E[n - 1][0], we can go back and calculate E[goalX][goalY].